Separating Oblivious and Adaptive Models of Variable Selection

Ziyun Chen, Jerry Li, Kevin Tian, Yusong Zhu

arXiv.org Machine Learning

Sparse recovery is among the most well-studied problems in learning theory and high-dimensional statistics. In this work, we investigate the statistical and computational landscapes of sparse recovery with $\ell_\infty$ error guarantees. This variant of the problem is motivated by \emph{variable selection} tasks, where the goal is to estimate the support of a $k$-sparse signal in $\mathbb{R}^d$. Our main contribution is a provable separation between the \emph{oblivious} (``for each'') and \emph{adaptive} (``for all'') models of $\ell_\infty$ sparse recovery. We show that under an oblivious model, the optimal $\ell_\infty$ error is attainable in near-linear time with $\approx k\log d$ samples, whereas in an adaptive model, $\gtrsim k^2$ samples are necessary for any algorithm to achieve this bound. This establishes a surprising contrast with the standard $\ell_2$ setting, where $\approx k \log d$ samples suffice even for adaptive sparse recovery. We conclude with a preliminary examination of a \emph{partially-adaptive} model, where we show nontrivial variable selection guarantees are possible with $\approx k\log d$ measurements.
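To make the variable-selection setup concrete, here is a minimal numpy sketch of the oblivious ("for each") model: Gaussian measurements drawn independently of a fixed $k$-sparse signal, a one-step correlation estimate used as a hypothetical stand-in for the paper's near-linear-time estimator (which this listing does not spell out), and support recovery by taking the top-$k$ coordinates.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, n = 1000, 5, 400  # ambient dimension, sparsity, sample count (illustrative choices)
x = np.zeros(d)
support = rng.choice(d, size=k, replace=False)
x[support] = rng.choice([-1.0, 1.0], size=k)  # k-sparse signal with unit-magnitude entries

A = rng.standard_normal((n, d))               # oblivious measurements: drawn independently of x
y = A @ x + 0.1 * rng.standard_normal(n)      # noisy linear observations

# One-step correlation estimate; a stand-in, not the paper's algorithm.
x_hat = A.T @ y / n

est_support = np.argsort(-np.abs(x_hat))[:k]  # variable selection: top-k coordinates
linf_error = np.max(np.abs(x_hat - x))        # the ell_infty error the abstract studies

print("true support:     ", sorted(support))
print("estimated support:", sorted(est_support))
print("ell_inf error:     %.3f" % linf_error)
```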



Efficient Convex Relaxations for Streaming PCA

Raman Arora, Teodor Vanislavov Marinov

Neural Information Processing Systems

Theorem 4.2. The following holds for Algorithm 2: with probability at least $1 - \delta$, for all $t \leq T$, $\langle P - P_t, C \rangle \leq 32 \log(3e/\delta) \, \frac{\kappa(C)^2}{t+1}$, where $\kappa = \kappa(C)$.

Figure 1: Experiments on synthetic data.
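As a rough illustration of the quantity bounded in Theorem 4.2, the sketch below runs a streaming eigenvector update (Oja's rule, used here as a hypothetical stand-in, since the paper's convex-relaxation algorithm is not reproduced in this listing) and reports the suboptimality $\langle P - P_t, C \rangle$, where $P$ projects onto the top eigenvector of the covariance $C$ and $P_t$ projects onto the current iterate.

```python
import numpy as np

rng = np.random.default_rng(1)

d, T = 20, 5000
# Covariance C with a dominant top eigenvector; P is the projection onto it.
U, _ = np.linalg.qr(rng.standard_normal((d, d)))
eigs = np.array([1.0] + [0.3] * (d - 1))      # eigengap chosen for illustration
C = U @ np.diag(eigs) @ U.T
P = np.outer(U[:, 0], U[:, 0])

w = rng.standard_normal(d)
w /= np.linalg.norm(w)

for t in range(1, T + 1):
    x = rng.multivariate_normal(np.zeros(d), C)  # one fresh sample per step
    w += (1.0 / t) * (x @ w) * x                 # Oja-style stochastic update
    w /= np.linalg.norm(w)

P_t = np.outer(w, w)
gap = np.trace((P - P_t) @ C)  # the quantity <P - P_t, C> bounded in Theorem 4.2
print("suboptimality <P - P_t, C> after %d samples: %.4f" % (T, gap))
```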



Quantum speedups for stochastic optimization

Neural Information Processing Systems

We consider the problem of minimizing a continuous function given access to a natural quantum generalization of a stochastic gradient oracle. We provide two new methods for the special case of minimizing a Lipschitz convex function. Each method obtains a dimension versus accuracy trade-off which is provably unachievable classically and we prove that one method is asymptotically optimal in low-dimensional settings. Additionally, we provide quantum algorithms for computing a critical point of a smooth non-convex function at rates not known to be achievable classically. To obtain these results we build upon the quantum multivariate mean estimation result of Cornelissen et al. [25] and provide a general quantum variance reduction technique of independent interest.
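For context, the classical setting this abstract builds on is minimization of a Lipschitz convex function through a stochastic (sub)gradient oracle. Below is a minimal classical subgradient baseline on an assumed objective $f(x) = \mathbb{E}\|x - z\|_1$; it is not the paper's quantum method, only a sketch of the classical oracle model the quantum oracle generalizes.

```python
import numpy as np

rng = np.random.default_rng(2)

d, T = 10, 20000
z_mean = rng.standard_normal(d)  # assumed problem data; minimizer of f is z_mean

def stoch_subgrad(x):
    # Stochastic subgradient oracle for f(x) = E||x - z||_1 with z ~ N(z_mean, I).
    z = z_mean + rng.standard_normal(d)  # one stochastic sample
    return np.sign(x - z)                # subgradient of ||x - z||_1 at x

x = np.zeros(d)
avg = np.zeros(d)
for t in range(1, T + 1):
    x -= (1.0 / np.sqrt(t)) * stoch_subgrad(x)  # standard 1/sqrt(t) step size
    avg += (x - avg) / t                        # running average of iterates

print("distance of averaged iterate to the minimizer:", np.linalg.norm(avg - z_mean))
```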